health professional


It's Causing People to Lose Jobs, Shatter Relationships, and Drain Their Savings. One Support Group Is Sounding the Alarm.

Slate

A.I.-related psychosis has cost people their marriages, life savings, and grip on reality. Last August, Adam Thomas found himself wandering the dunes of Christmas Valley, Oregon, after a chatbot kept suggesting he mystically "follow the pattern" of his own consciousness. Thomas was running on very little sleep--he'd been talking to his chatbot around the clock for months by that point, asking it to help improve his life. Instead it sent him on empty assignments, like meandering the vacuous desert sprawl. He'd lost his job as a funeral director and was living out of a van, draining his savings, and now he found himself stranded in the desert. When he woke up outside on a stranger's futon with no money to his name, he knew he'd hit rock bottom. "I wasn't aware of the dangers at the time, and I thought that the A.I. had statistical analysis abilities that would allow it to assist me if I opened up about my life," Thomas told me.


Comparing Knowledge Source Integration Methods for Optimizing Healthcare Knowledge Fusion in Rescue Operation

Nadeem, Mubaris, Fathi, Madjid

arXiv.org Artificial Intelligence

In medicine and healthcare, applying medical expertise, based on medical knowledge combined with patients' health information, is a life-critical challenge for patients and health professionals. The inherent complexity and variety of this knowledge create the need for a unified approach to gather, analyze, and utilize existing knowledge of medical treatments and medical operations, so that knowledge can be presented for accurate patient-driven decision-making. One way to achieve this is the fusion of multiple knowledge sources in healthcare. It gives health professionals the opportunity to select from multiple contextually aligned knowledge sources, which enables support for critical decisions. This paper presents multiple conceptual models for knowledge fusion in the field of medicine, based on a knowledge graph structure. It evaluates how knowledge fusion can be enabled and shows how to integrate various knowledge sources into the knowledge graph for rescue operations.
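
The abstract describes merging several knowledge sources into one graph. As a minimal sketch of what such fusion could look like, assuming a simple (subject, relation, object) triple model and the networkx library (the paper's actual schema is not given here, and the medical triples below are illustrative):

```python
# A minimal sketch of knowledge fusion over a graph; the node/edge model
# and example triples are assumptions, not the paper's actual schema.
import networkx as nx

def fuse_sources(sources):
    """Merge several knowledge sources into one graph, tagging provenance."""
    fused = nx.DiGraph()
    for name, triples in sources.items():
        for subject, relation, obj in triples:
            fused.add_edge(subject, obj, relation=relation, source=name)
    return fused

# Two hypothetical knowledge sources expressed as (subject, relation, object) triples.
guidelines = [("cardiac arrest", "treated_by", "CPR"),
              ("CPR", "requires", "chest compressions")]
rescue_protocols = [("cardiac arrest", "treated_by", "defibrillation")]

kg = fuse_sources({"guidelines": guidelines, "rescue_protocols": rescue_protocols})

# A rescuer can now query all treatments for a condition, with provenance attached.
for _, treatment, data in kg.out_edges("cardiac arrest", data=True):
    print(f"{treatment} (relation={data['relation']}, from {data['source']})")
```

Tagging each edge with its source is one way to let a health professional select among contextually aligned knowledge sources, as the abstract describes.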


KIRETT: Smart Integration of Vital Signs Data for Intelligent Decision Support in Rescue Scenarios

Nadeem, Mubaris, Zenkert, Johannes, Weber, Christian, Bender, Lisa, Fathi, Madjid

arXiv.org Artificial Intelligence

The integration of vital signs in healthcare has witnessed a steady rise, promising to assist health professionals in their daily tasks and improve patient treatment. In life-threatening situations, like rescue operations, crucial decisions need to be made in the shortest possible amount of time to ensure that excellent treatment is provided during life-saving measures. Integrating vital signs into the treatment holds the potential to improve time utilization for rescuers in such critical situations. Vital signs furthermore support health professionals during the treatment with useful information and suggestions. To achieve this goal, the KIRETT project provides treatment recommendations and situation detection, combined on a wrist-worn wearable for rescue operations. This paper presents the significant role of vital signs in improving decision-making during rescue operations and shows their impact on health professionals and patients in need.
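
As an illustration of vital-sign-driven decision support of the kind described above, here is a minimal rule-based sketch; the signal names and thresholds are illustrative assumptions, not KIRETT's actual models and not clinical guidance:

```python
# A minimal sketch of threshold-based vital-sign screening; the thresholds
# are illustrative assumptions, not KIRETT's models or medical advice.
def screen_vitals(heart_rate_bpm, spo2_pct, resp_rate_bpm):
    """Return alerts a wrist-worn device could surface to a rescuer."""
    alerts = []
    if heart_rate_bpm < 50 or heart_rate_bpm > 120:
        alerts.append(f"abnormal heart rate: {heart_rate_bpm} bpm")
    if spo2_pct < 90:
        alerts.append(f"low oxygen saturation: {spo2_pct}%")
    if resp_rate_bpm < 10 or resp_rate_bpm > 25:
        alerts.append(f"abnormal respiratory rate: {resp_rate_bpm}/min")
    return alerts

print(screen_vitals(heart_rate_bpm=130, spo2_pct=87, resp_rate_bpm=22))
# ['abnormal heart rate: 130 bpm', 'low oxygen saturation: 87%']
```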


Scientists confirm woke change made to Barbie over the course of 35 years - so did you notice it?

Daily Mail - Science & tech

Barbie is one of the most successful children's toys in history, spawning a multimedia franchise that includes merchandise, video games and a live-action film. Since US toy giant Mattel launched the original Barbie in 1959, more than 1 billion of the dolls have been sold worldwide. Certainly, Barbie's looks have been tweaked over the years to reflect changing beauty ideals and societal shifts. But according to a new study, one subtle change to Barbie has gone largely unnoticed – until now. Scientists in Australia have found that Barbies today have flatter feet than they did in past decades.


When Testing AI Tests Us: Safeguarding Mental Health on the Digital Frontlines

Pendse, Sachin R., Gergle, Darren, Kornfield, Rachel, Meyerhoff, Jonah, Mohr, David, Suh, Jina, Wescott, Annie, Williams, Casey, Schleider, Jessica

arXiv.org Artificial Intelligence

Red-teaming is a core part of the infrastructure that ensures that AI models do not produce harmful content. Unlike past technologies, the black-box nature of generative AI systems necessitates a uniquely interactional mode of testing, one in which individuals on red teams actively interact with the system, leveraging natural language to simulate malicious actors and solicit harmful outputs. This interactional labor done by red teams can result in mental health harms that are uniquely tied to the adversarial engagement strategies necessary to effectively red-team. The importance of ensuring that generative AI models do not propagate societal or individual harm is widely recognized -- one less visible foundation of end-to-end AI safety is also the protection of the mental health and wellbeing of those who work to keep model outputs safe. In this paper, we argue that the unmet mental health needs of AI red-teamers are a critical workplace safety concern. By analyzing the unique mental health impacts associated with the labor done by red teams, we propose potential individual and organizational strategies that could be used to meet these needs and safeguard the mental health of red-teamers. We develop our proposed strategies by drawing parallels between common red-teaming practices and interactional labor common to other professions (including actors, mental health professionals, conflict photographers, and content moderators), describing how individuals and organizations within these professional spaces safeguard their mental health given similar psychological demands. Drawing on these protective practices, we describe how safeguards could be adapted for the distinct mental health challenges experienced by red-teaming organizations as they mitigate emerging technological risks on the new digital frontlines.


"It's a conversation, not a quiz": A Risk Taxonomy and Reflection Tool for LLM Adoption in Public Health

Zhou, Jiawei, Chen, Amy Z., Shah, Darshi, Reese, Laura Schwab, De Choudhury, Munmun

arXiv.org Artificial Intelligence

Recent breakthroughs in large language models (LLMs) have generated both interest and concern about their potential adoption as accessible information sources or communication tools across different domains. In public health -- where stakes are high and impacts extend across populations -- adopting LLMs poses unique challenges that require thorough evaluation. However, structured approaches for assessing potential risks in public health remain under-explored. To address this gap, we conducted focus groups with health professionals and health issue experiencers to unpack their concerns, situated across three distinct and critical public health issues that demand high-quality information: vaccines, opioid use disorder, and intimate partner violence. We synthesize participants' perspectives into a risk taxonomy, distinguishing and contextualizing the potential harms LLMs may introduce when positioned alongside traditional health communication. This taxonomy highlights four dimensions of risk: individual behaviors, human-centered care, the information ecosystem, and technology accountability. For each dimension, we discuss specific risks and example reflection questions to help practitioners adopt a risk-reflexive approach. This work offers a shared vocabulary and reflection tool for experts in both computing and public health to collaboratively anticipate, evaluate, and mitigate risks in deciding when to employ LLM capabilities (or not) and how to reduce harm when they are used.
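
To make the shape of such a reflection tool concrete, here is a minimal sketch encoding the four risk dimensions as a checklist; the example questions are illustrative paraphrases, not the paper's wording:

```python
# A minimal sketch of the four risk dimensions as a reflection checklist;
# the questions are illustrative paraphrases, not the paper's actual text.
RISK_TAXONOMY = {
    "individual behaviors": "Could LLM answers change how people act on health advice?",
    "human-centered care": "Does the tool displace or degrade human care relationships?",
    "information ecosystem": "How might outputs interact with existing health information channels?",
    "technology accountability": "Who is responsible when the model gives harmful guidance?",
}

def reflect(use_case):
    """Print one reflection prompt per risk dimension for a proposed LLM use."""
    print(f"Proposed use: {use_case}")
    for dimension, question in RISK_TAXONOMY.items():
        print(f"- [{dimension}] {question}")

reflect("LLM chatbot answering vaccine questions")
```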


Interactive Example-based Explanations to Improve Health Professionals' Onboarding with AI for Human-AI Collaborative Decision Making

Lee, Min Hun, Ng, Renee Bao Xuan, Choo, Silvana Xinyi, Thilarajah, Shamala

arXiv.org Artificial Intelligence

A growing body of research explores the use of AI explanations during users' decision phases in human-AI collaborative decision-making. However, previous studies have found issues of overreliance on 'wrong' AI outputs. In this paper, we propose interactive example-based explanations to improve health professionals' onboarding with AI and support better-calibrated reliance on it during AI-assisted decision-making. We implemented an AI-based decision support system that utilizes a neural network to assess the quality of post-stroke survivors' exercises, along with interactive example-based explanations that systematically surface the nearest neighbors of a test/task sample from the training set of the AI model to assist users' onboarding with the model. To investigate the effect of interactive example-based explanations, we conducted a study with domain experts (health professionals) to evaluate their performance and reliance on AI. Interactive example-based explanations during onboarding helped health professionals rely on AI more appropriately, yielding a higher ratio of 'right' decisions and a lower ratio of 'wrong' decisions than providing only feature-based explanations during the decision-support phase. Our study discusses new challenges of assisting users' onboarding with AI for human-AI collaborative decision-making.
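
The core mechanism, surfacing a test sample's nearest neighbors from the training set, can be sketched with standard tooling. A minimal example assuming scikit-learn and random stand-in embeddings (the paper's neural network and exercise features are not reproduced here):

```python
# A minimal sketch of example-based explanation via nearest neighbors in an
# embedding space; the embeddings and labels are random stand-ins, not the
# paper's stroke-rehabilitation data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(100, 8))   # stand-in for model embeddings of training exercises
train_labels = rng.integers(0, 2, size=100)    # 1 = 'good quality' exercise, 0 = 'poor'

index = NearestNeighbors(n_neighbors=3).fit(train_embeddings)

def explain(test_embedding):
    """Surface the nearest training examples (and their labels) for a test sample."""
    distances, indices = index.kneighbors(test_embedding.reshape(1, -1))
    return [(int(i), int(train_labels[i]), float(d))
            for i, d in zip(indices[0], distances[0])]

print(explain(rng.normal(size=8)))  # [(train index, label, distance), ...] shown to the clinician
```

Showing the retrieved training examples alongside their labels is what lets a clinician judge whether the AI's assessment of a new case rests on genuinely similar past cases.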


Improving Health Professionals' Onboarding with AI and XAI for Trustworthy Human-AI Collaborative Decision Making

Lee, Min Hun, Choo, Silvana Xin Yi, Thilarajah, Shamala D/O

arXiv.org Artificial Intelligence

With advances in AI/ML, there has been growing research on explainable AI (XAI) and on how humans interact with AI and XAI for effective human-AI collaborative decision-making. However, we still lack an understanding of how AI systems and XAI should first be presented to users without technical backgrounds. In this paper, we present the findings of semi-structured interviews with health professionals (n=12) and students (n=4) majoring in medicine and health to study how to improve onboarding with AI and XAI. For the interviews, we built upon human-AI interaction guidelines to create onboarding materials for an AI system for stroke rehabilitation assessment and its explanations, and introduced them to the participants. Our findings reveal that beyond traditional performance metrics on AI, participants desired benchmark information, the practical benefits of AI, and interaction trials to better contextualize AI performance and to refine the objectives and performance of AI. Based on these findings, we highlight directions for improving onboarding with AI and XAI and human-AI collaborative decision-making.


Your Model Is Not Predicting Depression Well And That Is Why: A Case Study of PRIMATE Dataset

Milintsevich, Kirill, Sirts, Kairit, Dias, Gaël

arXiv.org Artificial Intelligence

This paper addresses the quality of annotations in mental health datasets used for NLP-based depression level estimation from social media texts. While previous research relies on social media-based datasets annotated with binary categories, i.e. depressed or non-depressed, recent datasets such as D2S and PRIMATE aim for nuanced annotations using PHQ-9 symptoms. However, most of these datasets rely on crowd workers without the domain knowledge needed for annotation. Focusing on the PRIMATE dataset, our study reveals concerns regarding annotation validity, particularly for the 'lack of interest or pleasure' symptom. Through reannotation by a mental health professional, we introduce finer labels and textual spans as evidence, identifying a notable number of false positives. Our refined annotations, to be released under a Data Use Agreement, offer a higher-quality test set for anhedonia detection. This study underscores the necessity of addressing annotation quality issues in mental health datasets, advocating for improved methodologies to enhance NLP model reliability in mental health assessments.
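
The false-positive audit described above amounts to comparing crowd labels against expert reannotations. A minimal sketch, assuming binary per-post symptom labels; the names and data are illustrative, not drawn from PRIMATE:

```python
# A minimal sketch of auditing crowd labels against expert reannotation;
# labels are illustrative, not PRIMATE's actual annotations.
def false_positive_rate(crowd, expert):
    """Fraction of crowd-positive labels the expert marked negative."""
    fp = sum(1 for c, e in zip(crowd, expert) if c == 1 and e == 0)
    positives = sum(crowd)
    return fp / positives if positives else 0.0

crowd_labels  = [1, 1, 0, 1, 1, 0]  # crowd says symptom (e.g., anhedonia) present
expert_labels = [1, 0, 0, 0, 1, 0]  # expert reannotation of the same posts
print(f"false positive rate: {false_positive_rate(crowd_labels, expert_labels):.2f}")  # 0.50
```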


VR Is Revolutionizing Therapy. Why Aren't More People Using It? - CNET

#artificialintelligence

But there's one thing that, as he puts it, scared the shit out of him: needles. His aversion was severe enough to hold him back from getting routine tests. Stokes, now 40, recalls an instance in his 20s when he simply couldn't bring himself to get a blood test. He once even drove to the testing facility to get his blood drawn, but couldn't follow through with it. His partner (now wife) eventually convinced him to get the test, but he remembers it as one of "the most horrific" experiences he's had. "I kind of passed out a little bit along the way, and was sweaty and clammy and all that sort of stuff," he said.